AI psychosis has been a hot topic recently, with stories of AI chatbots fixating users on conspiracies and paranoid ideas. A recent piece in Futurism by Maggie Harrison Dupré dives into 10 cases where AI inspired people to engage in physical violence, sexual abuse, and stalking. Our firm is investigating cases against AI companies whose chatbots have inspired people to hurt others. 

C.A. Goldberg, PLLC has been at the forefront of confronting how AI can be maliciously used to upend a person’s life. Since 2014, our firm has represented victims of tech-facilitated abuse, including image-based sexual abuse, harassment, stalking, and more. And we’ve seen AI bring tech-facilitated violence to a whole new scale of danger and ease.  

Dupré details many stories in which casual use of chatbots led people to start turning to them for life advice, or “therapy.” For some, talking to a chatbot feels like a nonjudgmental, comfortable place to explore one’s emotions. But it can also affirm delusions. As Dr. Brendan Kelly says in the article, “From a psychiatric perspective, problems associated with delusions are maintained not only by the content of delusions but also by reinforcement, especially when that reinforcement appears authoritative, consistent, and emotionally validating…Chatbots are uniquely placed to provide exactly that combination.” 

The problem isn’t necessarily the existence of a delusional thought in the first place. It’s that these chatbots are programmed to be sycophantic. They reinforce and amplify the ideas they’ve calculated will keep consumers hooked. This is a manufacturing defect when users are influenced to hurt innocent victims. As the article describes, many chatbot users lose sleep and experience increasing paranoia and manic behavior. The chatbot can reinforce distrust of the people around them, causing more isolation and deeper dependence on the chatbot and its bad ideas. In some cases, users are manipulated into harming innocent people.  

The resulting abuse includes, but is not limited to: 

  • Physical abuse (including by people with no prior history of domestic violence) 
  • Repeatedly sending unwanted messages to the point of harassment 
  • Stalking 
  • Murder threats 
  • Surveillance 
  • Image-based abuse 
  • Harassing, obsessive posts on social media 

And the companies that create, run, and regularly update these AI products know how they’re being used. As Dupré points out, one man posted ChatGPT-generated content about court proceedings after his stalking resulted in his ex getting an order of protection against him. This is a clear illustration of ChatGPT being aware that an order of protection was in place, yet still helping the stalker create abusive content.  

In another case, the chatbot reinforced a man’s delusional ideas that his mother was spying on him, ultimately leading him to brutally murder her and then kill himself. 

“Microsoft, xAI, OpenAI, Google, and any other company creating AI products have a responsibility to not release dangerous products into the market. The fact that a chatbot can encourage a fixation on another person, and assist with abusive behavior, is no mistake. It’s the result of decisions made during the design and coding process.” – Founding Attorney Carrie Goldberg.  

AI/tech companies allow and encourage people to use their products in malicious ways. But we are not willing to let them get away with it. As it stands, there’s no case law establishing Section 230 immunity for chatbots.  

When AI products actively reinforce and encourage obsession, stalking, sexual violence, murder, and suicide, we must hold the companies behind them accountable.  

If you or someone you know is the victim of stalking, sexual or domestic violence, or any other form of abuse that was encouraged by a chatbot, do not wait to explore your claims. Contact us here or call (646) 666-8908.